
    iFlask: Isolate flask security system from dangerous execution environment by using ARM TrustZone

    Security is essential in mobile computing, and various access control modules have therefore been introduced. However, the complicated mobile runtime environment may directly compromise the integrity of these security modules, or even compel them to make wrong access control decisions. A trusted Flask-based security system therefore needs to be isolated from the dangerous mobile execution environment at runtime. In this paper, we propose an isolated Flask security architecture called iFlask to solve this problem for Flask-based mandatory access control (MAC) systems. iFlask puts its security server subsystem into the enclave provided by ARM TrustZone so as to avert the negative impacts of the malicious environment. Meanwhile, iFlask's object manager subsystems, which run in the mobile system kernel, use a built-in supplicant proxy to efficiently look up policy decisions made by the back-end security server residing in the enclave and to enforce these rules on the system with trustworthy behavior. Moreover, to protect those iFlask components not covered by the enclave, we provide an exception trap mechanism that enables TrustZone to enlarge its protection scope to selected memory regions threatened by the malicious system, and we establish a secure communication channel to the enclave as well. The prototype is implemented on SELinux, the widely used Flask-based MAC system and the base of SEAndroid. The experimental results show that SELinux receives reliable protection: it resists all known vulnerabilities (e.g., CVE-2015-1815) and remains unaffected by the attacks in the test set. The proposed architecture has only a slight performance impact, with degradation ranging from 0.53% to 6.49% compared to the unmodified system.
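    To make the split concrete, here is a minimal sketch of the supplicant-proxy pattern the abstract describes: in-kernel object managers cache access decisions and fall back to a security server isolated in the TrustZone enclave. This is not iFlask's actual code; the class names, the cache structure, and the channel interface are all illustrative assumptions.

```python
# Hedged sketch of the supplicant-proxy pattern (hypothetical names throughout).

class EnclaveChannel:
    """Stand-in for the secure-world transport (e.g. an SMC-based call)."""

    def __init__(self, policy):
        self.policy = policy  # the policy lives only inside the enclave

    def query(self, subject, obj, perm):
        return self.policy.get((subject, obj, perm), False)  # default deny


class SupplicantProxy:
    """In-kernel object-manager side: caches MAC decisions, asks the
    enclave-resident security server on a cache miss."""

    def __init__(self, enclave_channel):
        self.channel = enclave_channel
        self.avc = {}  # access vector cache of prior decisions

    def check_access(self, subject, obj, perm):
        key = (subject, obj, perm)
        if key in self.avc:          # fast path: decision already cached
            return self.avc[key]
        # Slow path: look up the decision in the enclave, then cache it.
        decision = self.channel.query(subject, obj, perm)
        self.avc[key] = decision
        return decision
```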

    Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features

    Recently, deep convolutional neural networks (CNNs) have demonstrated strong performance on RGB salient object detection. Although depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low-level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast and are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods. We also show that the low-level and mid-level depth features both contribute to improvements in the results. In particular, our method achieves an F-score of 0.848 on the RGBD1000 dataset, which is 10.7% better than the second-best method.
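    As a rough intuition for the kind of low-level depth cue the abstract mentions, the sketch below computes a simple center-surround depth contrast: regions sitting in front of their surroundings score high. The window sizes and normalization are illustrative assumptions, not the paper's exact feature design.

```python
# Minimal sketch of a depth-contrast feature (assumed parameters, not the
# paper's implementation).
import numpy as np
from scipy.ndimage import uniform_filter

def depth_contrast(depth, inner=9, outer=31):
    """depth: HxW float array. Returns a map in [0, 1] that is high where a
    local region is closer to the camera than its surround."""
    local = uniform_filter(depth, size=inner)     # mean depth of the region
    surround = uniform_filter(depth, size=outer)  # mean depth of the context
    contrast = surround - local                   # salient objects tend to be closer
    return (contrast - contrast.min()) / (np.ptp(contrast) + 1e-8)
```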

    Automatic Generation of Grounded Visual Questions

    In this paper, we propose the first model able to generate visually grounded questions of diverse types for a single image. Visual question generation is an emerging topic which aims to ask questions in natural language based on visual input. To the best of our knowledge, automatic methods for generating meaningful questions of various types for the same visual input have been lacking. To address this problem, we propose a model that automatically generates visually grounded questions of varying types. Our model takes as input both images and the captions produced by a dense captioning model, samples the most probable question types, and generates the questions in sequence. Experimental results on two real-world datasets show that our model outperforms the strongest baseline in terms of both correctness and diversity by a wide margin.
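    The two-stage idea (sample a question type, then generate a question conditioned on it) can be sketched as below. The type set, feature dimensions, and module layout are assumptions made for illustration; the paper's actual architecture may differ substantially.

```python
# Hedged sketch: sample a question type from fused image/caption features,
# then seed a decoder with it. Repeated sampling yields diverse questions.
import torch
import torch.nn as nn

QUESTION_TYPES = ["what", "where", "how", "why", "who", "when"]  # assumed set

class GroundedVQG(nn.Module):
    def __init__(self, feat_dim=512, hidden=512, vocab=10000):
        super().__init__()
        self.type_head = nn.Linear(2 * feat_dim, len(QUESTION_TYPES))
        self.type_emb = nn.Embedding(len(QUESTION_TYPES), feat_dim)
        self.decoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, img_feat, cap_feat, steps=12):
        fused = torch.cat([img_feat, cap_feat], dim=-1)
        # Sampling (rather than argmax) over types is what makes repeated
        # calls produce different questions for the same image.
        q_type = torch.distributions.Categorical(
            logits=self.type_head(fused)).sample()
        token = self.type_emb(q_type).unsqueeze(1)   # seed the decoder
        state, logits = None, []
        for _ in range(steps):
            out, state = self.decoder(token, state)
            logits.append(self.out(out))
            token = out  # feed the hidden output back (simplified teacher-free loop)
        return q_type, torch.cat(logits, dim=1)
```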

    GTAV-NightRain: Photometric Realistic Large-scale Dataset for Night-time Rain Streak Removal

    Rain is transparent: it reflects and refracts light in the scene toward the camera. In outdoor vision, rain, and especially rain streaks, degrades visibility and therefore needs to be removed. Existing rain streak removal datasets account for density, scale, direction, and intensity, but transparency is not fully taken into account. This problem is particularly serious in night scenes, where the appearance of rain largely depends on its interaction with scene illumination and changes drastically across positions within the image. This is problematic because an unrealistic dataset causes serious domain bias. In this paper, we propose GTAV-NightRain, a large-scale synthetic night-time rain streak removal dataset. Unlike existing datasets, using a 3D computer graphics platform (namely GTA V) allows us to model the three-dimensional interaction between rain and illumination, which ensures photometric realism. The current release of the dataset contains 12,860 HD rainy images and 1,286 corresponding HD ground truth images in diversified night scenes. A systematic benchmark and analysis are provided along with the dataset to inspire further research.
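    For readers who want to train on such paired data, a loader might look like the sketch below. The directory layout and the filename convention (several rainy renderings sharing one ground-truth scene, consistent with the 12,860-to-1,286 ratio) are assumptions; check the released dataset's documentation for the actual structure.

```python
# Hedged sketch of a paired rainy/clean dataset loader (assumed file layout).
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class NightRainPairs(Dataset):
    def __init__(self, root, transform=None):
        self.rainy = sorted(Path(root, "rainy").glob("*.png"))
        self.gt_dir = Path(root, "gt")
        self.transform = transform

    def __len__(self):
        return len(self.rainy)

    def __getitem__(self, idx):
        rainy_path = self.rainy[idx]
        # Assumed naming: "<scene>_<k>.png" pairs with ground truth "<scene>.png".
        scene = rainy_path.stem.rsplit("_", 1)[0]
        rainy = Image.open(rainy_path).convert("RGB")
        clean = Image.open(self.gt_dir / f"{scene}.png").convert("RGB")
        if self.transform:
            rainy, clean = self.transform(rainy), self.transform(clean)
        return rainy, clean
```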

    Learning to Dehaze from Realistic Scene with A Fast Physics-based Dehazing Network

    Dehazing has long been a popular computer vision topic. A real-time dehazing method with reliable performance is highly desired for many applications, such as autonomous driving. Recent learning-based methods require datasets containing pairs of hazy images and clean ground truth references, but it is generally impossible to capture accurate ground truth in real scenes. Many existing works sidestep this difficulty by rendering haze from depth on common RGB-D datasets using the haze imaging model. However, there is still a gap between these synthetic datasets and real hazy images, as large datasets with high-quality depth are mostly indoor and outdoor depth maps are imprecise. In this paper, we complement the existing datasets with a new, large, and diverse dehazing dataset containing real outdoor scenes from high-definition (HD) 3D movies. We select a large number of high-quality frames of real outdoor scenes and render haze on them using depth from stereo. Our dataset is more realistic than existing ones, and we demonstrate that using it greatly improves dehazing performance on real scenes. In addition to the dataset, we also propose a lightweight and reliable dehazing network inspired by the physics model. Our approach outperforms other methods by a large margin and becomes the new state of the art. Moreover, the lightweight design of the network enables our method to run at real-time speed, much faster than the baseline methods.
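    The haze imaging model the abstract refers to is the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)), where J is the clean scene, A the airlight, and d the depth. Rendering haze from depth, as the paper does on its stereo-derived depth maps, is a direct application of this formula; the β and A values below are illustrative, not the paper's settings.

```python
# Rendering haze from depth with the standard haze imaging model
# (illustrative parameter values).
import numpy as np

def render_haze(clean, depth, beta=1.0, airlight=0.8):
    """clean: HxWx3 image in [0, 1]; depth: HxW in scene units."""
    t = np.exp(-beta * depth)[..., None]      # transmission map t(x)
    return clean * t + airlight * (1.0 - t)   # hazy observation I(x)
```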